[Cpp API Compatibility] Complement ScalarType #78581
SigureMo merged 3 commits into PaddlePaddle:develop
Conversation
Your PR has been submitted successfully. Thank you for contributing to this open-source project!
Pull request overview
This PR enhances the C10 compatibility layer’s ScalarType support by adding missing placeholder scalar wrapper types (quantized, bit-packed, and additional float formats) and wiring them into the scalar-type dispatch/mapping macros to better align with upstream enum coverage.
Changes:
- Add placeholder c10 scalar wrapper headers for `qint8`/`qint32`, `quint8`/`quint4x2`/`quint2x4`, `bits*`, and several `Float*` low-precision formats.
- Extend `AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_QINTS` to include these types (and `ComplexHalf`) so `toString`, `elementSize`, and type-mapping specializations are generated consistently from the macro.
- Remove now-redundant manual `toString`/`elementSize` switch cases and scalar-type constants that are covered by the macro generation.
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| paddle/phi/api/include/compat/c10/util/quint8.h | Adds placeholder c10::quint8 wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/quint4x2.h | Adds placeholder c10::quint4x2 wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/quint2x4.h | Adds placeholder c10::quint2x4 wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/qint8.h | Adds placeholder c10::qint8 wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/qint32.h | Adds placeholder c10::qint32 wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/Float8_e8m0fnu.h | Adds placeholder c10::Float8_e8m0fnu wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/Float8_e5m2fnuz.h | Adds placeholder c10::Float8_e5m2fnuz wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/Float8_e4m3fnuz.h | Adds placeholder c10::Float8_e4m3fnuz wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/Float4_e2m1fn_x2.h | Adds placeholder c10::Float4_e2m1fn_x2 wrapper type and exports to at/torch. |
| paddle/phi/api/include/compat/c10/util/bits.h | Adds placeholder c10::bits* wrapper types and exports to at/torch. |
| paddle/phi/api/include/compat/c10/core/ScalarType.h | Updates scalar-type macro lists and derives toString/elementSize/type mappings from the expanded macro coverage. |
/re-run all-failed
ShigureNyako
left a comment
This version consolidates the ScalarType enum coverage and the macro-ization of `toString`/`elementSize` in one place, which is the right direction. I also looked at the only currently failing CI, IXUCA: the log shows a Paddle-iluvatar patch-apply failure, which does not appear directly related to this ScalarType change.
However, I am not approving yet, mainly because the public C++ surface is still not fully aligned with upstream in two places:
- The quantized wrappers currently lack upstream's `using underlying = ...`; downstream code that goes through quantized dispatch will still break at the source level.
- The public shape of `Float4_e2m1fn_x2` also differs from upstream; at minimum the field name and comparison operators are missing.
Both points are "half-migration" issues in the compatibility layer. Please address the differences noted in my inline comments before moving forward; otherwise we end up exposing the ScalarType/type names while the real C++ API still cannot be used the PyTorch way.
namespace c10 {

struct qint8 {
I suggest aligning the public surface of the quantized wrappers more closely with upstream here. PyTorch's torch/headeronly/util/qint8.h / qint32.h / quint8.h / quint4x2.h / quint2x4.h all define `using underlying = ...`, and macros like `AT_DISPATCH_CASE_QINT` depend directly on `typename scalar_t::underlying`.
The current implementation only has `val_`; downstream code that routes these types through quantized dispatch will still get stuck at the source level. Please at least add the `underlying` alias; to match upstream fully, the `alignas(...)` should also be kept consistent.
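A minimal sketch of the wrapper shape being requested, modeled on the torch/headeronly/util/qint8.h pattern. The exact `alignas` value and member layout are illustrative assumptions, not the verbatim upstream definition:

```cpp
#include <cstdint>

namespace c10 {

// Placeholder quantized wrapper with the upstream-style public surface:
// a public val_ member plus the `underlying` alias that quantized dispatch
// macros (e.g. AT_DISPATCH_CASE_QINT) reach through.
struct alignas(1) qint8 {
  using underlying = int8_t;  // storage type exposed to dispatch code
  underlying val_;
  qint8() = default;
  explicit qint8(underlying val) : val_(val) {}
};

}  // namespace c10

// Generic code can now name the storage type via scalar_t::underlying,
// mirroring how the dispatch macros use it; without the alias this
// template would fail to compile.
template <typename scalar_t>
typename scalar_t::underlying raw_value(scalar_t q) {
  return q.val_;
}
```

The point of the alias is source-level compatibility: code written against PyTorch's quantized dispatch compiles unchanged against the compat layer.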
Suggested aligning the quantized wrappers' public surface with upstream: add `using underlying = ...` and keep `alignas(...)` consistent.
Fixed. The latest commit 00f14cbd55d2 adds `using underlying = ...` to the qint/quint wrappers and adjusts `alignas(...)` to match upstream; after re-review I consider this compatibility issue resolved.
namespace c10 {

struct Float4_e2m1fn_x2 {
This placeholder type should also replicate upstream's minimal public surface directly. PyTorch's torch/headeronly/util/Float4_e2m1fn_x2.h uses `uint8_t val_` and provides `operator==`/`operator!=`.
Renaming the field to `x` and omitting the comparison operators leaves code that depends on those public members and comparison interfaces incompatible. Since the goal of this PR is C++ API compatibility, please at least bring the field name and comparison operators up to upstream's shape.
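A sketch of the upstream-shaped placeholder being asked for, modeled on torch/headeronly/util/Float4_e2m1fn_x2.h: one `uint8_t` packing two fp4 values, a public `val_` member, and bitwise equality. Constructor and operator details here are assumptions about the minimal surface, not the exact upstream code:

```cpp
#include <cstdint>

namespace c10 {

// Placeholder for a pair of packed e2m1 (fp4) values stored in one byte.
// Only the public surface matters for source compatibility: the val_
// member name and the comparison operators.
struct alignas(1) Float4_e2m1fn_x2 {
  uint8_t val_;
  Float4_e2m1fn_x2() = default;
  explicit Float4_e2m1fn_x2(uint8_t val) : val_(val) {}
};

// Bitwise comparison on the packed byte; no fp4 decoding is needed
// for a placeholder type.
inline bool operator==(const Float4_e2m1fn_x2& a, const Float4_e2m1fn_x2& b) {
  return a.val_ == b.val_;
}
inline bool operator!=(const Float4_e2m1fn_x2& a, const Float4_e2m1fn_x2& b) {
  return !(a == b);
}

}  // namespace c10
```

Code that reads `t.val_` or compares two packed values then compiles identically against PyTorch headers and the compat layer.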
Suggested replicating upstream's minimal public surface: the field name and `operator==`/`operator!=` need to be added.
Fixed. The latest commit 00f14cbd55d2 changes `Float4_e2m1fn_x2` to the upstream-consistent public shape with `val_` and adds `operator==`/`operator!=`; after re-review I consider this source-level compatibility issue resolved.
ShigureNyako
left a comment
I re-reviewed the author's new commit 00f14cbd55d2.
Both previously raised blocking points have been addressed:
- The qint/quint wrappers now include `using underlying = ...`, and `alignas(...)` has been adjusted to match upstream.
- `Float4_e2m1fn_x2` now uses the upstream-consistent `val_` public surface, with `operator==`/`operator!=` added.
Based on this incremental change I see no new ABI, calling-convention, or additional dispatch risks; the code review passes on my side. The remaining CI can proceed as usual.
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
Shouldn't these files also carry the PyTorch reference notice?
Fixed. I re-reviewed the author's latest commit e75db41d17f0; bits.h and the rest of this batch of new compat wrappers / ScalarType.h now carry the PyTorch reference notice. This increment only adds the notice comments and does not change the type layouts, ABI, or calling semantics confirmed earlier.
// The file has been adapted from pytorch project
// Licensed under BSD-style license -
// https://github.com/pytorch/pytorch/blob/main/LICENSE
@ShigureNyako Please open a separate PR that updates all such notices across the repo, following the changes here.
Got it. I will collect the cleanup of these adapted-from-PyTorch reference notices into a separate follow-up PR, to avoid mixing a repo-wide cleanup with this ScalarType compatibility change. This PR remains approved on my side; I will open a separate task for the follow-up.
@ShigureNyako Please open a separate PR that updates all such notices across the repo, following the changes here.
Done. I opened a separate repo-wide cleanup PR #78590 that unifies the existing malformed adapted-from-PyTorch reference blocks to the notice format used by ScalarType.h and the new compat wrappers in this PR.
That follow-up PR only changes the reference notice comments; it does not touch ABI, dispatch behavior, or calling conventions.
PR Category
Execute Infrastructure
PR Types
Improvements
Description
Complete the ScalarType support by providing placeholder types to align with the upstream enum coverage.
Does this change numerical accuracy?
No